219 research outputs found

    From images via symbols to contexts: using augmented reality for interactive model acquisition

    Systems that perform in real environments need to bind their internal state to externally perceived objects, events, or complete scenes. How to learn this correspondence has been a long-standing problem in computer vision as well as artificial intelligence. Augmented Reality provides an interesting perspective on this problem because a human user can directly relate displayed system results to real environments. In the following we present a system that is able to bootstrap internal models from user-system interactions. Starting from pictorial representations, it learns symbolic object labels that provide the basis for storing observed episodes. In a second step, more complex relational information is extracted from the stored episodes, enabling the system to react to specific scene contexts.
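    The described pipeline (pictorial input, user-confirmed symbolic labels, episodic storage, relational extraction) can be illustrated with a small sketch. The Python snippet below is purely illustrative and not the authors' implementation; all names (Episode, EpisodicMemory, co_occurrence) are hypothetical. It covers the last two stages: labelled scenes stored as episodes, and simple co-occurrence relations extracted from them afterwards.

```python
from dataclasses import dataclass, field
from itertools import combinations

@dataclass
class Episode:
    scene_id: str
    labels: list            # symbolic object labels confirmed by the user

@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)

    def store(self, scene_id, labels):
        self.episodes.append(Episode(scene_id, labels))

    def co_occurrence(self):
        """Count how often two labels were observed in the same episode."""
        counts = {}
        for ep in self.episodes:
            for a, b in combinations(sorted(set(ep.labels)), 2):
                counts[(a, b)] = counts.get((a, b), 0) + 1
        return counts

memory = EpisodicMemory()
memory.store("desk-1", ["cup", "keyboard", "monitor"])
memory.store("desk-2", ["cup", "keyboard"])
print(memory.co_occurrence())   # {('cup', 'keyboard'): 2, ...}
```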

    Active vision-based localization for robots in a home-tour scenario

    Self-localization is a crucial task for mobile robots. It is not only a requirement for autonomous navigation but also provides contextual information to support human-robot interaction (HRI). In this paper we present an active vision-based localization method for integration into a complex robot system working in human interaction scenarios (e.g. a home tour) in a real-world apartment. The holistic features used are robust to illumination and structural changes in the scene. The system uses only a single pan-tilt camera, shared between different vision applications running in parallel, to reduce the number of sensors. Additional information from other modalities (such as laser scanners) can be used, profiting from integration into an existing system. The camera view can be actively adapted, and the evaluation showed that different rooms can be discerned.
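    As an illustration of holistic, appearance-based room discrimination, the sketch below uses a coarse global histogram as a stand-in for the illumination-robust holistic features of the paper and classifies a view by its nearest prototype. The class names and the feature choice are assumptions for illustration only, not the authors' method.

```python
import numpy as np

def holistic_features(image, bins=16):
    """Reduce an image to a normalised global grey-level histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

class RoomClassifier:
    """Nearest-prototype classifier over holistic image features."""
    def __init__(self):
        self.prototypes = {}                 # room name -> feature vector

    def train(self, room, image):
        self.prototypes[room] = holistic_features(image)

    def classify(self, image):
        query = holistic_features(image)
        return min(self.prototypes,
                   key=lambda room: np.linalg.norm(self.prototypes[room] - query))

rng = np.random.default_rng(0)
clf = RoomClassifier()
clf.train("kitchen", rng.integers(0, 120, (64, 64)))        # darker sample views
clf.train("living room", rng.integers(120, 255, (64, 64)))  # brighter sample views
print(clf.classify(rng.integers(0, 120, (64, 64))))          # -> kitchen
```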

    Probabilistic Scene Modeling for Situated Computer Vision


    Towards Automated Execution and Evaluation of Simulated Prototype HRI Experiments

    Lier F, Lütkebohle I, Wachsmuth S. Towards Automated Execution and Evaluation of Simulated Prototype HRI Experiments. In: HRI '14 Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction. New York, NY, USA: ACM; 2014: 230-231.
    Autonomous robots are highly relevant targets for interaction studies, but can exhibit behavioral variability that confounds experimental validity. Currently, testing on real systems is the only means to prevent this, but it remains very labour-intensive and often happens too late. To improve this situation, we are working towards early testing by means of partial simulation with automated assessment, based upon continuous software integration to prevent regressions. We will introduce the concept and describe a proof-of-concept that demonstrates fast feedback and coherent experiment results across repeated trials.
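    A minimal sketch of this testing idea, assuming a hypothetical run_simulated_trial() entry point rather than the authors' actual toolchain: a continuous-integration job repeats a simulated trial several times, aggregates the outcome measure, and fails if the variability across trials exceeds a tolerance.

```python
import statistics

def run_simulated_trial(seed):
    """Placeholder: launch the robot components in simulation and return
    a measured outcome (e.g. task completion time in seconds)."""
    return 12.0 + (seed % 3) * 0.1   # deterministic stand-in result

def evaluate(trials=10, max_spread=0.5):
    """Run repeated trials and check that the results stay coherent."""
    results = [run_simulated_trial(seed) for seed in range(trials)]
    spread = max(results) - min(results)
    print(f"mean={statistics.mean(results):.2f}s spread={spread:.2f}s")
    return spread <= max_spread

if __name__ == "__main__":
    assert evaluate(), "behavioural variability exceeds tolerance"
```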

    Direct On-Line Imitation of Human Faces with Hierarchical ART Networks

    Holthaus P, Wachsmuth S. Direct On-Line Imitation of Human Faces with Hierarchical ART Networks. In: 2013 IEEE RO-MAN: The 22nd International Symposium on Robot and Human Interactive Communication. Piscataway, NJ: IEEE; 2013: 370-371.
    This work-in-progress paper presents an on-line system for robotic heads capable of mimicking humans. The marker-less method depends solely on the interactant's face as input and does not rely on a set of basic emotions; it is thus capable of displaying a large variety of facial expressions. A preliminary evaluation indicates solid performance with potential for improvement.
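    The ART (Adaptive Resonance Theory) matching step can be sketched as follows. This is a heavily simplified, single-layer illustration, not the hierarchical network of the paper, and the feature vectors are made up. New facial-expression categories are created whenever no stored prototype matches above the vigilance threshold, which is why no fixed set of basic emotions is needed.

```python
import numpy as np

class SimpleART:
    """Single-layer, fuzzy-ART-style categorisation of feature vectors."""
    def __init__(self, vigilance=0.9, learning_rate=0.5):
        self.vigilance = vigilance
        self.lr = learning_rate
        self.categories = []                 # learned prototype vectors

    def present(self, features):
        features = np.asarray(features, dtype=float)
        for i, proto in enumerate(self.categories):
            match = np.minimum(proto, features).sum() / features.sum()
            if match >= self.vigilance:      # resonance: adapt the prototype
                self.categories[i] = (self.lr * np.minimum(proto, features)
                                      + (1 - self.lr) * proto)
                return i
        self.categories.append(features)     # novel expression -> new category
        return len(self.categories) - 1

art = SimpleART()
smile = [0.9, 0.1, 0.8]      # hypothetical normalised facial features
neutral = [0.5, 0.5, 0.5]
print(art.present(smile), art.present(neutral), art.present(smile))  # 0 1 0
```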

    Integration and coordination in a cognitive vision system

    In this paper, we present a case study that exemplifies general ideas of system integration and coordination. The application field of assistant technology provides an ideal test bed for complex computer vision systems, including real-time components, human-computer interaction, dynamic 3-D environments, and information retrieval aspects. In our scenario the user is wearing an augmented reality device that supports her/him in everyday tasks by presenting information that is triggered by perceptual and contextual cues. The system integrates a wide variety of visual functions such as localization, object tracking and recognition, action recognition, and interactive object learning. We show how different kinds of system behavior are realized using the Active Memory Infrastructure, which provides the technical basis for distributed computation and a data- and event-driven integration approach.
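    The data- and event-driven integration style can be illustrated with a toy publish/subscribe memory. This is only a conceptual sketch and not the Active Memory Infrastructure's actual API; the class and method names below are assumptions.

```python
from collections import defaultdict

class ActiveMemory:
    """Toy shared memory: inserted elements are stored and broadcast as events."""
    def __init__(self):
        self.elements = []
        self.listeners = defaultdict(list)   # element type -> callbacks

    def subscribe(self, element_type, callback):
        self.listeners[element_type].append(callback)

    def insert(self, element_type, payload):
        self.elements.append((element_type, payload))
        for callback in self.listeners[element_type]:
            callback(payload)                # event-driven coordination

memory = ActiveMemory()
memory.subscribe("object.recognized",
                 lambda obj: print(f"tracker picks up {obj['label']}"))
memory.insert("object.recognized", {"label": "cup", "position": (0.4, 1.2)})
```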

    Modeling Software Systems in Experimental Robotics for Improved Reproducibility -- A Case Study with the iCub Humanoid Robot

    Lier F, Wachsmuth S, Wrede S. Modeling Software Systems in Experimental Robotics for Improved Reproducibility -- A Case Study with the iCub Humanoid Robot. Presented at the IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain.

    Mensch-Maschine-Interaktion

    Wachsmuth I. Mensch-Maschine-Interaktion [Human-Machine Interaction]. In: Stephan A, Walter S, eds. Handbuch Kognitionswissenschaft. Stuttgart, Weimar: J.B. Metzler; 2013: 361-364.